Search Results: "dkg"

9 January 2013

Daniel Kahn Gillmor: universally accessible storage for the wary user

A friend wrote me a simple question today. My response turned out to be longer than i expected, but i hope it's useful (and maybe other people will have better suggestions) so i thought i'd share it here too: Angela Starita wrote:
I'd like to save my work in a location where I can access it from any computer. I'm wary of using the mechanisms provided by Google and Apple. Can you suggest another service?
Here's my reply:

I think you're right to be wary of the big cloud providers, who have a tendency to inspect your data to profile you, to participate in arbitrary surveillance regimes, and to try to sell your eyeballs to advertisers.

Caveat: You have to trust the client machine too

But it's also worth remembering that the network service provider is not the only source of risk. If you really mean "accessing your data from any computer", that means the computer you're using to access the data can do whatever it wants with it. That is, you need to trust both the operator of these "cloud" services, *and* the administrator/operating system of the client computer you're using to access your data. For example, if you log into any "secure" account from a terminal in a web café, that leaves you vulnerable to the admins of the web café (and, in the rather-common case of sloppily-administered web terminals, vulnerable to the previous user(s) of the terminal as well).

Option 0: Portable physical storage

One way to have your data so that you can access it from "any computer" is to not rely on the network at all, but rather to carry a high-capacity MicroSD card (and USB adapter) around with you. You'll probably want to format the card with a widely-understood filesystem like FAT32, instead of NTFS or HFS+ or ext4, each of which is understood by some of the major operating systems but not all. Almost every computer these days has either a microSD slot or a USB port, and some computers aren't connected to the network at all, so physical storage works even where network storage can't. This also means that you don't have to rely on someone else to manage servers that keep your data available all the time. Note that going the microSD route doesn't remove the caveat about needing to trust the client workstation you're using, and it has another consideration: you'd be responsible for your own backup in the case of hardware failure.
You're responsible for your own backup in the case of online storage too, of course -- but the better online companies are probably better equipped than most of us to deal with hardware failure. OTOH, they're also susceptible to some data-loss scenarios that we aren't as individual humans (e.g. the company might go bankrupt, or get bought by a competitor who wants to terminate the service, or have a malicious employee who decides to take revenge). Backup of a MicroSD card isn't particularly hard, though: just get a USB stick that's the same size, and regularly duplicate the contents of the MicroSD card to the USB stick. One last consideration is storage size -- MicroSD cards are currently limited to 32GB or 64GB. If you have significantly more data than that, this approach might not be possible, or you might need to switch to a USB hard disk, which would limit your ability to use the data on computers that don't have a USB port (such as some smartphones).

Option 1: Proprietary service providers

If you don't think this portable physical storage option is the right choice for you, there are a couple of proprietary service providers, Wuala and SpiderOak, who offer some flavor of "cloud" storage while claiming not to look at the contents of your data. I'm not particularly happy with either of those, though, in part because the local client software they want you to run is proprietary, so there's no way to verify that they are actually unable to access the contents of your data. But i'd be a lot happier with either Wuala or SpiderOak than i would be with Google Drive, Dropbox, or iCloud.

Option 2: What i really want

I'm much more excited about the network-accessible, free-software, privacy-sensitive network-based storage tool known as git-annex assistant. The project is spearheaded by Joey Hess, who is one of the most skilled and thoughtful software developers i know of. "assistant" (and git-annex, from which it derives) has the advantage of being pretty agnostic about the backend service (many plugins for many different cloud providers) and allows you to encrypt your data locally before sending it to the remote provider. This also means you can put your encrypted data in more than one provider, so that if one of the providers fails for some reason, you can be relatively sure that you have another copy available. But "assistant" won't be ready for Windows or Android for several months (builds are available for Linux and Mac OS X now), so i don't know if it meets the criterion of "accessible from any computer". And, of course, even with the encryption capabilities, the old caveat about needing to trust the local client machine still applies.

21 December 2012

Daniel Kahn Gillmor: libasound2-plugins is a resource hog!

I run mpd on debian on "igor", an NSLU2 -- a very low-power ~266MHz armel machine, with no FPU and a scanty 32MiB of RAM. This serves nicely to feed my stereo with music that is controllable from anywhere on my LAN. When playing music and talking to a single mpd client, the machine is about 50% idle.

However, during a recent upgrade, something wanted to pull in pulseaudio, which in turn wanted to pull in libasound2-plugins, and i distractedly (foolishly) let it. With that package installed, after an mpd restart, the CPU was completely thrashed (100% utilization) and music only played in stutters of 1 second interrupted by a couple seconds of silence. igor was unusable for its intended purpose.

Getting rid of pulseaudio was my first attempt to fix the stuttering, but the problem remained even after pulse was all gone and mpd was restarted. Then i did a little search of which packages had been freshly installed in the recent run:
grep ' install .* <none> ' /var/log/dpkg.log
and used that to pick out the offending package. After purging libasound2-plugins and restarting mpd, igor is back in action. Lesson learned: on low-overhead machines, don't allow apt to install recommends!
echo 'APT::Install-Recommends "0";' >> /etc/apt/apt.conf
And it should go without saying, but sometimes i get sloppy: i need to pay closer attention during an "apt-get dist-upgrade".

Tags: alsa, apt, low-power, mpd
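To illustrate the dpkg.log search above (sample lines approximating dpkg's log format -- not from igor's actual log): a fresh install records the old version as "<none>", while an upgrade records a real old version, so the pattern matches only newly installed packages:

```shell
# fresh installs log "<none>" as the old version; upgrades log a real
# version, so the grep pattern picks out only the new arrivals
printf '%s\n' \
  '2012-12-20 10:00:00 install libasound2-plugins <none> 1.0.25-2' \
  '2012-12-20 10:00:01 upgrade mpd 0.16.7-2 0.16.7-2+b1' \
  | grep ' install .* <none> '
# -> prints only the libasound2-plugins line
```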

6 December 2012

Daniel Kahn Gillmor: set default margins for OpenOffice as a sysadmin?

I'm maintaining a lab of debian squeeze machines that run OpenOffice.org (i'm considering upgrading to LibreOffice from squeeze-backports). I'd like to adjust the default page margins for all users of Writer. Most instructions i've found suggest ways to do this as a single user, but not how to make the change system-wide. I don't want to ask every user of these machines to do this (and i also don't want to tamper with each home directory directly -- that's not something i can maintain reliably). Alas, i can find no documentation about how to change the default page margins system-wide for either Oo.o or LibreOffice. Surely this is something that can be done without a recompile. What am i missing? Tags: configuration, libreoffice, margins, openoffice.org, sysadmin, templates

4 December 2012

Daniel Kahn Gillmor: Error messages are your friend (postgres is good)

Here is a bit of simple (yet subtly-flawed) sql, which produces different answers on different database engines:
0 dkg@pip:~$ cat test.sql
drop table if exists foo;
create table foo (x int, y int);
insert into foo VALUES (1,3);
insert into foo VALUES (1,5);
select y from foo group by x;
0 dkg@pip:~$ sqlite3 < test.sql
5
0 dkg@pip:~$ mysql -N dkg < test.sql
3
0 dkg@pip:~$ psql -qtA dkg < test.sql
ERROR:  column "foo.y" must appear in the GROUP BY clause or be used in an aggregate function
LINE 1: select y from foo group by x;
               ^
0 dkg@pip:~$ 
Postgres's refusal to guess in the face of an ambiguous query, and the clarity of the error message it gives instead, are two of the many reasons postgresql is my database engine of choice.

Tags: errors, postgresql, sql
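For the record, the subtle flaw: y is neither grouped by nor aggregated, so each engine picks an arbitrary y per group (or, in postgres's case, refuses). A hedged sketch of one fix, run through sqlite3 here: say explicitly which y you want, and every engine agrees:

```shell
# same schema as test.sql above, but with an explicit aggregate so the
# query is unambiguous and every engine returns the same answer
sqlite3 <<'EOF'
drop table if exists foo;
create table foo (x int, y int);
insert into foo VALUES (1,3);
insert into foo VALUES (1,5);
select x, max(y) from foo group by x;
EOF
# -> prints: 1|5
```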

27 November 2012

Daniel Kahn Gillmor: more proprietary workarounds, sigh

In supporting a labful of Debian GNU/Linux machines with NFS-mounted home directories, i find some of my users demand a few proprietary programs. Adobe Flash is one of the most demanded, in particular because some popular streaming video services (like Amazon Prime and Hulu) seem to require it. I'm not a fan of proprietary network services, but i'm happy to see that Amazon Prime takes Linux support seriously enough to direct users to Adobe's Linux Flash "Protected Content" troubleshooting page (Amazon Prime's rival Netflix, by comparison, has an abysmal record on this platform). Of course, none of this will work on any platform but i386, since the flash player is proprietary software and its proprietors have shown no interest in porting it or letting others port it :(

One of the main issues with proprietary network services is their inclination to view their customer as their adversary, as evidenced by various DRM schemes. Two examples: the Flash Player's DRM module appears to arbitrarily break when you use one home directory across multiple machines, and the DRM module appears to depend on HAL, which is being deprecated by most of the common distributions.

Why bother with this kind of gratuitous breakage? We know that video streaming can and does work fine without DRM. With modern browsers, freely-formatted video, and HTML5 video tags, video just works, and it works under the control of the user, on any platform. But Flash appears to throw up unnecessary hurdles, requiring not only proprietary code, but deprecated subsystems and fiddly workarounds to get it functional. I'm reminded of Mako's concept of "antifeatures" -- how much engineering time and effort went into making this system actually less stable and reliable than it would otherwise have been? How could that work have been better-directed?

Tags: antifeatures, flash, hal, proprietary, streaming

16 June 2012

Vincent Bernat: GPG Key Transition Statement 2012

I am transitioning my GPG key from an old 1024-bit DSA key to a new 4096-bit RSA key. The old key will continue to be valid for some time but I prefer all new correspondence to be encrypted with the new key. I will be making all signatures going forward with the new key. I have followed the excellent tutorial from Daniel Kahn Gillmor which also explains why this migration is needed. The only step that I did not execute is issuing a new certification for keys I have signed in the past. I did not find any easy way to tell which keys I have signed. Here is the signed transition statement (I have stolen it from Zack):
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256,SHA1
I am transitioning GPG keys from an old 1024-bit DSA key to a new
4096-bit RSA key.  The old key will continue to be valid for some
time, but I prefer all new correspondance to be encrypted in the new
key, and will be making all signatures going forward with the new key.
This transition document is signed with both keys to validate the
transition.
If you have signed my old key, I would appreciate signatures on my new
key as well, provided that your signing policy permits that without
reauthenticating me.
The old key, which I am transitional away from, is:
  pub   1024D/F22A794E 2001-03-23
      Key fingerprint = 5854 AF2B 65B2 0E96 2161  E32B 285B D7A1 F22A 794E
The new key, to which I am transitioning, is:
  pub   4096R/353525F9 2012-06-16 [expires: 2014-06-16]
      Key fingerprint = AEF2 3487 66F3 71C6 89A7  3600 95A4 2FE8 3535 25F9
To fetch the full new key from a public key server using GnuPG, run:
  gpg --keyserver keys.gnupg.net --recv-key 95A42FE8353525F9
If you have already validated my old key, you can then validate that
the new key is signed by my old key:
  gpg --check-sigs 95A42FE8353525F9
If you then want to sign my new key, a simple and safe way to do that
is by using caff (shipped in Debian as part of the "signing-party"
package) as follows:
  caff 95A42FE8353525F9
Please contact me via e-mail at <vincent@bernat.im> if you have any
questions about this document or this transition.
  Vincent Bernat
  vincent@bernat.im
  16-06-2012
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)
iQIcBAEBCAAGBQJP3LchAAoJEJWkL+g1NSX5fV0P/iEjcLp7EOky/AVkbsHxiV30
KId7aYmcZRLJpvLZPz0xxThZq2MTVhX+SdiPcrSTa8avY8Kay6gWjEK0FtB+72du
3RxhVYDqEQtrhUmIY2jOVyw9c0vMJh4189J+8iJ5HGQo9SjFEuRrP9xxNTv3OQD5
fRTMUBMC3q1/KcuhPA8ULp4L1OS0xTksRfvs6852XDfSJIZhsYxYODWpWqLsGEcu
DhQ7KHtbOUwjwsoiURGnjwdiFpbb6/9cwXeD3/GAY9uNHxac6Ufi4J64bealuPXi
O4GgG9cEreBTkPrUsyrHtCYzg43X0q4B7TSDg27j0xm+xd+jW/d/0AlBHPXcXemc
b+pw09qLOwQWbsd6d4bx22VXI75btSFs8HwR9hKHBeOAagMHz+AVl5pLXo2rYoiH
34fR1HWqyRdT3bCt19Ys1N+d0fznsZNFOMC+l23QyptOoMz7t7vZ6GbB20ExafrW
+gi7r1sV/6tb9sYMcVV2S3XT003Uwg8PXajyOnFHxPsMoX9zsk1ejo3lxkkTZs0H
yLZtUj3iZ3yX9e2yfv3eOxitR4+bIntEbMecnTI9xJn+33QTz/pWBqg9uDosqzUo
UoQtc6WVn9x3Zsi7aneDYcp06ZdphgsyWhgiLIhQG9MAK9wKthKiZv8DqGYDOsKt
WwpQFvns33e5x4SM4KxXiEYEARECAAYFAk/ctyEACgkQKFvXofIqeU5YLwCdFhEL
P7vpUJA2zv9+dpPN5GLfBlcAn0mDGJcjJpYZl/+aXEnP/8cE0day
=0QnC
-----END PGP SIGNATURE-----
For easier access, I have also published it in text format. You can check it with:
$ gpg --keyserver keys.gnupg.net --recv-key 95A42FE8353525F9
gpg: requesting key 353525F9 from hkp server keys.gnupg.net
gpg: key 353525F9: "Vincent Bernat <bernat@luffy.cx>" not changed
gpg: Total number processed: 1
gpg:              unchanged: 1
$ curl http://vincent.bernat.im/media/files/key-transition-2012.txt | \
>       gpg --verify
To avoid signing/encrypting with the old key, which shares the same email addresses as the new one, I saved it, removed it from the keyring and added it again. The new key is now first in both the secret and the public keyrings and will be used whenever the appropriate email address is requested.
$ gpg --export-secret-keys F22A794E > ~/tmp/secret
$ gpg --export F22A794E > ~/tmp/public
$ gpg --delete-secret-key F22A794E
sec  1024D/F22A794E 2001-03-23 Vincent Bernat <bernat@luffy.cx>
Delete this key from the keyring? (y/N) y
This is a secret key! - really delete? (y/N) y
$ gpg --delete-key F22A794E
pub  1024D/F22A794E 2001-03-23 Vincent Bernat <bernat@luffy.cx>
Delete this key from the keyring? (y/N) y
$ gpg --import ~/tmp/public
gpg: key F22A794E: public key "Vincent Bernat <bernat@luffy.cx>" imported
gpg: Total number processed: 1
gpg:               imported: 1
gpg: 3 marginal(s) needed, 1 complete(s) needed, classic trust model
gpg: depth: 0  valid:   2  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 2u
gpg: next trustdb check due at 2014-06-16
$ gpg --import ~/tmp/secret
gpg: key F22A794E: secret key imported
gpg: key F22A794E: "Vincent Bernat <bernat@luffy.cx>" not changed
gpg: Total number processed: 1
gpg:              unchanged: 1
gpg:       secret keys read: 1
gpg:   secret keys imported: 1
$ rm ~/tmp/public ~/tmp/secret
$ gpg --edit-key F22A794E
[...]
gpg> trust
[...]
Please decide how far you trust this user to correctly verify other users' keys
(by looking at passports, checking fingerprints from different sources, etc.)
  1 = I don't know or won't say
  2 = I do NOT trust
  3 = I trust marginally
  4 = I trust fully
  5 = I trust ultimately
  m = back to the main menu
Your decision? 5
Do you really want to set this key to ultimate trust? (y/N) y
I now need to gather some signatures for the new key. If this is appropriate for you, please sign the new key if you signed the old one.

29 May 2012

Daniel Kahn Gillmor: KVM, Windows XP, and Stop Error Code 0x0000007B

i dislike having to run Windows as much as the next free software developer, but like many sysadmins, i am occasionally asked to maintain some legacy systems. A nice way to keep these systems available (while not having to physically maintain them) is to put them in a virtual sandbox using a tool like kvm. While kvm makes it relatively straightforward to install WinXP from a CD (as long as you have the proper licensing key), it is more challenging to transition a pre-existing hardware windows XP installation into a virtual instance, due to Windows only wanting to boot to ide chipsets that it remembers being installed to. In particular, booting a disk image pulled from a soon-to-be-discarded physical disk can produce a Blue Screen of Death (BSOD) with the message:
Stop error code 0x0000007B
or
(INACCESSIBLE_BOOT_DEVICE)
This seems like it's roughly the equivalent (in a standard debian GNU/Linux environment) of specifying MODULES=dep in /etc/initramfs-tools/initramfs.conf, and then trying to swap out all the hardware. At first blush, Microsoft's knowledge base suggests doing an in-place upgrade or a full repartition and reinstall, which are both fairly drastic measures -- you might as well just start from scratch, which is exactly what you don't want to have to do for a nursed-along legacy system whose original installers are no longer even with the organization. Fortunately, a bit more digging in the Knowledge Base turned up an unsupported set of steps that appears to be the equivalent of setting MODULES=most (at least for the IDE chipsets). Running this on the old hardware before imaging the disk worked for me, though i did need to re-validate Windows XP after the reboot by typing in the long magic code again. i guess they're keying it to the hardware, which clearly changed in this instance. Such silliness to spend time working around, really, when i'd rather be spending my time working on free software. :/
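For reference, the Debian-side analogy mentioned above is a one-line setting (standard Debian paths; run "update-initramfs -u" afterwards to rebuild the image):

```shell
# /etc/initramfs-tools/initramfs.conf
# "most" bundles drivers for most hardware into the initramfs, instead of
# only the modules the currently-running hardware needs ("dep") -- so the
# image still boots after the hardware underneath it changes
MODULES=most
```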

16 March 2012

Daniel Kahn Gillmor: Compromising webapps: a case study

This paper should be required reading for anyone developing, deploying, or administering web applications. It's also interesting to read the perspective of the folks operating the compromised webapp (details are in the section titled "Digital Vote-By-Mail" on pages 34 to 38).

4 March 2012

Jamie McClelland: Managing KVM instances

At May First/People Link we have been using KVM for several years now and recently I have been running KVM instances on my local laptop. I'm pleased to see all the work that has gone into libvirt, which seems like a robust and full-featured suite of tools for managing many virtualization technologies, including KVM. However, we don't use it at May First/People Link for a number of reasons. The most pressing is that it runs all virtual guests as the same user, but also because it offers far more features than we need (such as graphical access to virtual servers, which we don't need since none of our guest servers run X).

On May First/People Link hosts, we are using a relatively simple set of bash scripts (accessible via git at git://lair.fifthhorseman.net/~dkg/kvm-manager). These scripts re-use many tools we are already familiar with to build and launch kvm guests. Each guest runs as a dedicated non-privileged user, with a console available using screen, and the kvm process is managed using runit. Since our admins are familiar with these tools already, the learning curve involved is much less steep.

Despite the relative simplicity of kvm-manager, it was still more complicated and involved than I wanted on my laptop. Additionally, I wanted to fully understand every piece of the puzzle, and separating out user privileges wasn't important to me. So I wrote the following bash script to launch virtual servers. It assumes you are using the logical volume manager. Some editing is required if you want to re-use it.
#!/bin/bash
# manage a virtual server
# add /etc/udev/rules.d/92-kvm.rules with the following line (change "jamie" 
# to the group you are running as and vg_animal0 to the name of your volume 
# group):
# ACTION=="change", SUBSYSTEM=="block", ATTR{dm/name}=="vg_animal0-*_root", GROUP="jamie"
#
# install a new server by:
# * create logical volume with the name: servername_root
# * $0 servername start /usr/local/share/ISOs/default.iso
bridge=virbr0
user=jamie
base=/home/jamie
server="$1"
vg=vg_animal0
# modify server memory here
mem=768
command="$2"
[ -z "$command" ] && command=start
die () {
  printf "$1\n"
  exit 1
}
cdarg=
if [ -n "$3" ]; then
  [ ! -f "$3" ] && die "Third argument should be path to cd iso. Can't find that path."
  cdarg="-cdrom $3"
fi
function start() {
  lvname="${vg}-${server}_root"
  # trigger udev to ensure we have proper ownership of the block device
  sudo udevadm trigger --subsystem-match=block --attr-match=dm/name="$lvname"
  lv="/dev/mapper/$lvname"
  [ ! -e "$lv" ] && die "Can't find $lv"
  sudo modprobe -v tun || die "Failed to modprobe tun module"
  # create network device if it doesn't already exist
  ip tuntap | grep "$tap" || sudo ip tuntap add dev "$tap" mode tap user "$user" || die "Failed to create device $tap"
  # bring up device if it's not already up
  ip link | grep " $tap " || sudo ip link set "$tap" up || die "Failed to set $tap to up"
  # add the device to the bridge so it can use the upstream network connections
  /sbin/brctl show | grep "$tap" || sudo brctl addif "$bridge" "$tap" || die "Failed to add tap to bridge"
  # launch kvm in a screen session
  screen -S "$server" kvm -drive "file=$lv,if=virtio,id=hda,format=raw" -m "$mem" -device "virtio-net-pci,vlan=1,id=net0,mac=$mac,bus=pci.0" -net "tap,ifname=$tap,script=no,downscript=no,vlan=1,name=hostnet0" $cdarg -nographic || die "Failed to start kvm"
}
function cleanup() {
  read -p "Please shutdown the host first then hit any key to continue..."
  sudo brctl delif "$bridge" "$tap"
  sudo ip link set "$tap" down
  sudo ip tuntap del mode tap dev "$tap"
}
# generate reproducible mac address
mac="$(printf "02:%s" "$(printf "%s\0%s" "$(hostname)" "$ server "   sha256sum   sed 's/\(..\)/\1:/g'   cut -f1-5 -d:)" )"
tap="$ server 0"
case "$command" in
  start)
    start
    ;;
  cleanup)
    cleanup
    ;;
  *)
    die "Please pass start or cleanup as first argument"
esac
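The reproducible-MAC line in the script can be exercised on its own: it hashes the hostname and guest name together, keeps the first five byte-pairs of the digest, and prefixes 02: (a locally-administered address), so the same guest always gets the same MAC. With a hypothetical guest name "web0":

```shell
# derive a stable MAC for guest "web0" (example name) the same way the
# script does: hash hostname + guest name, take five hex pairs, prefix 02:
server=web0
mac="$(printf "02:%s" "$(printf "%s\0%s" "$(hostname)" "${server}" | sha256sum | sed 's/\(..\)/\1:/g' | cut -f1-5 -d:)")"
echo "$mac"   # e.g. 02:ab:12:cd:34:ef -- the value depends on your hostname
```

Because the address is derived rather than random, re-launching the guest never churns DHCP leases or ARP caches.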

6 January 2012

Timo Jyrinki: I have a new GPG key


-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1,SHA512

Hello,

I'm transitioning from my 2003 GPG key to a new one.

The old key will continue to be valid for some time, but I eventually
plan to revoke it, so please use the new one from now on. I would also
like this new key to be re-integrated into the web of trust. This message
is signed by both keys to certify the transition.

The old key was:

pub 1024D/FC7F6D0F 2003-07-10
Key fingerprint = E6A8 8BA0 D28A 3629 30A9 899F 82D7 DF6D FC7F 6D0F

The new key is:

pub 4096R/90BDD207 2012-01-06
Key fingerprint = 6B85 4D46 E843 3CD7 CDC0 3630 E0F7 59F7 90BD D207

To fetch my new key from a public key server, you can simply do:

gpg --keyserver pgp.mit.edu --recv-key 90BDD207

If you already know my old key, you can now verify that the new key is
signed by the old one:

gpg --check-sigs 90BDD207

If you don't already know my old key, or you just want to be double
extra paranoid, you can check the fingerprint against the one above:

gpg --fingerprint 90BDD207

If you are satisfied that you've got the right key, and the UIDs match
what you expect, I'd appreciate it if you would sign my key:

gpg --sign-key 90BDD207

Lastly, if you could send me these signatures, i would appreciate it.
You can either send me an e-mail with the new signatures by attaching
the following file:

gpg --armor --export 90BDD207 > timojyrinki.asc

Or you can just upload the signatures to a public keyserver directly:

gpg --keyserver pgp.mit.edu --send-key 90BDD207

Please let me know if there is any trouble, and sorry for the inconvenience.

(this post has been modified from the example at
http://www.debian-administration.org/users/dkg/weblog/48)

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.11 (GNU/Linux)

iEYEARECAAYFAk8GuZoACgkQgtffbfx/bQ9nqACglWyHnDTFQfdKmz8OCd3oL6iR
hcEAmgKJ7RZsgwxwkRGPhygy5y1Ztb+3iQIcBAEBCgAGBQJPBrmaAAoJEOD3WfeQ
vdIHdVQQAMT1yvIogzbtK6sUnWqwbrXI9pDEFk7AzJTb80R+wzxsw7gu9gcBDk8G
BL2O26GKUqKWA3ytuApSl42FJam/Lusi9npT3XNkmHs6FaBMNuLYrqEXmCwXwWr/
OrLyeeLiF4yxgbNWbv+600BqAWqFlo6NeTgQKsJWtCjR3RVMxX3R8nzjDnKJuF+z
c6+2JKBWyx/HVUKcJpJrFDDR36HRFvVJomTuma2JCQ/RAl9vzAguqNYOi1QkuuQv
EF1gXH7gLifukGuwquP1DHP6SWWkj77jtRWr5ewC0xymbrArzAwKbvMQl3VpKBHh
MmpJjYP3ECyL14AKi/TY2Lidi0Sf6yqFMcPcreoih01N0OU0NXmD4IrHMT24/ssb
okDUe1o3YImjGq1jTACvlzC8s54EfLsqDgSP98SGVpuoDqPJUwVk4nuHj8q0vDSs
qZox26gVwB2FAOUi1BFiZbIzM5rsyYfCGyWUGiAwBFf54lYRAeCDCt8iAOOL1Ov/
TumIGYdLoXnDuOJq1VjXLGx2OFDrpyU8SPGoa3zNEVz39tgxQ48ASJEqcqt7HvBy
IW+TTsMLdJ1Ait9aCM3mzzr1iwP8TrL0qUsdRLOE6AKdAqocIfqXY8OeDKhbUiOJ
CXWk5q3xheK3sDWUXX7J63bAAUH4jFnpQEOVMJKBUNMKsWa0iXDS
=mklN
-----END PGP SIGNATURE-----

26 December 2011

Asheesh Laroia: Short key IDs are bad news (with OpenPGP and GNU Privacy Guard)

Summary: It is important that we (the Debian community that relies on OpenPGP through GNU Privacy Guard) stop using short key IDs. There is no vulnerability in OpenPGP and GPG. However, using short key IDs (like 0x70096AD1) is fundamentally insecure; it is easy to generate collisions for short key IDs. We should always use 64-bit (or longer) key IDs, like: 0x37E1C17570096AD1 or 0xEC4B033C70096AD1. TL;DR: This now gives two results: gpg --recv-key 70096AD1

Some background, and my two keys

Years ago, I read dkg's instructions on migrating the Debian OpenPGP infrastructure. It told me that the time and effort I had spent getting my key into the strong set wasn't as useful as I thought it had been. I felt deflated. I had put in quite a bit of effort over the years to strongly-connect my key to a variety of signatures, and I had helped people get their own keys into the strong set this way. If I migrated off my old key and revoked it, I'd be abandoning some people for whom I was their only link into the strong set. And what fun it was to first become part of the strong set! And all the eyebrows I raised when I told people I was going to meet up with people I met on a website called Biglumber... I even made it my Facebook.com user ID.

So if I had to generate a new key, I decided I had better really love the short key ID. But at that point, I already felt pretty attached to the number 0x70096AD1. And I couldn't come up with anything better. So that settled it: no key upgrade until I had a new key whose ID is the same as my old key. That dream has become a reality. Search for my old key ID, and you get two keys!
$ gpg --keyserver pgp.mit.edu --recv-key 0x70096AD1
gpg: requesting key 70096AD1 from hkp server pgp.mit.edu
gpg: key 70096AD1: public key "Asheesh Laroia <asheesh@asheesh.org>" imported
gpg: key 70096AD1: public key "Asheesh Laroia <asheesh@asheesh.org>" imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 2
gpg:               imported: 2  (RSA: 1)
I also saw it as an opportunity: I know that cryptography tools are tragically easy to mis-use. The use of 32-bit key IDs is fundamentally incorrect -- too little entropy. Maybe shocking people by creating two "identical" keys will help speed the transition away from this mis-use.
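Concretely, a v4 key's short ID is nothing more than the last eight hex digits of its 160-bit fingerprint (and the 64-bit "long" ID is the last sixteen) -- only that tail is ever compared, which is why collision-searching is cheap. Using Vincent Bernat's fingerprint from his transition statement earlier on this page:

```shell
# a v4 fingerprint is 40 hex digits; the "long" and "short" key IDs are
# just its trailing 16 and 8 digits respectively
fpr="AEF2348766F371C689A7360095A42FE8353525F9"
printf '%s\n' "$fpr" | tail -c 17   # long ID:  95A42FE8353525F9
printf '%s\n' "$fpr" | tail -c 9    # short ID: 353525F9
```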

A neat stunt abusing --refresh-keys

Thanks to a GNU Privacy Guard bug, it is super easy to get my new key. Let's say that, like many people, you only have my old key on your workstation:
$ gpg --list-keys | grep 70096AD1
pub   1024D/70096AD1 2005-12-28
Just ask GPG to refresh:
$ gpg --keyserver pgp.mit.edu --refresh-keys
gpg: refreshing 1 key from hkp://pgp.mit.edu
gpg: requesting key 70096AD1 from hkp server pgp.mit.edu
gpg: key 70096AD1: public key "Asheesh Laroia <asheesh@asheesh.org>" imported
gpg: key 70096AD1: "Asheesh Laroia <asheesh@asheesh.org>" not changed
gpg: Total number processed: 2
gpg:               imported: 1  (RSA: 1)
gpg:              unchanged: 1
gpg: no ultimately trusted keys found
You can see that it set out to refresh just 1 key. It did that by querying the keyserver for the short ID. The keyserver provided two hits for that query. In the end, GPG refreshes one key and actually imports a new key into the keyring! Now you have two:
$ gpg --list-keys | grep 70096AD1
pub   1024D/70096AD1 2005-12-28
pub   4096R/70096AD1 2011-03-11
There is a bug filed in GNU Privacy Guard about this. It has a patch attached. There is, at the moment, no plan for a new release.

A faster attack, but nothing truly new

My friend Venkatesh tells me there is an apocryphal old Perl script that could be used to generate key ID collisions. Here in the twenty-first century, l33t h4x0rz like Georgi Guninski are trying to create collisions. In May 2010, "halfdog" posted a note to the full-disclosure list, with a tool that generates PGP keys with chosen short key IDs. I haven't benchmarked or tested that tool, but I have used a different tool (private for now) that can generate collisions in a similar fashion. It takes about 3 hours to loop through all key IDs on a dinky little netbook. You don't have to use any of these tools. You can just rent time on an elastic computing service or a botnet, or use your own personal computer, and generate keys until you have a match.

I think it's easy to underestimate the seriousness of this problem: tools like the PGP Key Pathfinder should be updated to only accept 64-bit (or longer) key IDs if we want to trust their output.

My offer: I will make you a key

I've been spending some time wondering: What sort of exciting demonstration can I create to highlight that this is a real problem? Some ideas I've had:
  • Publish a private/public key pair whose key ID is the same as Phil Zimmerman's, original author of PGP
  • Publish a private/public key pair whose key ID is the same as Werner Koch's, maintainer of GNU Privacy Guard
  • Publish a set of public keys that mimic the entire PGP strong set, except where I control the private key of all these keys
The last one would be extremely amusing, and would be a hat-tip to some work discussed in Raph Levien's Google Tech Talk about Advogato. For now, here is my offer: If you send me a request signed with a key in the strong set, I will create a 4096-bit RSA public/private key pair whose 32-bit key ID is one greater than yours. So if you are 0x517DD4E4, I will generate 0x517DD4E5. I will post the keys here, along with a note about who requested it, and instructions on how to import them into your keyring. (Note: I will politely decline to create a new key whose 32-bit key ID would create a collision; apologies if your key ID is just one away from someone else's.)

P.S. The prize for best sarcastic retort goes to Ian Jackson. He said, "I should go and create a lot of keys with your key ID. I'll set the real name to 'Not Asheesh Laroia' so everyone is totally clear about what is going on."

16 December 2011

Jamie McClelland: Privilege Separation

What are the biggest security threats to my laptop? Almost all the software I install is vetted and signed by a member of the Debian team. I run reliable virus software on our mail server. My disk is encrypted and xscreensaver locks my screen after a few minutes of inactivity. What else? The two biggest threats I've recently considered are web browsing and non-free software, or software that doesn't come from Debian (I regularly have to use skype and zimbra, for example).

To mitigate these risks, I've configured these programs to run as their own users, thus adding a layer of separation between the programs and my primary user account, which has access to my gpg/ssh keys and email. With a program like zimbra it's fairly easy. I created a zimbra user, added my primary account's public ssh key to /home/zimbra/.ssh/authorized_keys, and added the following stanza to ~/.ssh/config:
Host zimbra 
Hostname localhost 
User zimbra 
ForwardX11 yes
Now I can start zimbra with:
ssh zimbra ./zdesktop-launch
Skype was a little harder, since the skype client has to access the audio system. With pulseaudio, though, it's a snap. I copied /etc/pulse/default.pa to ~/.pulse/ and added the line:
load-module module-native-protocol-tcp auth-ip-acl=127.0.0.1
Then, I added /home/skype/.pulse/client.conf with the contents:
default-server = 127.0.0.1
[Note: Jamie Rollins and dkg have pointed out that this arrangement allows any user on my laptop to send arbitrary data to pulseaudio, running as my primary account. They suggested configuring pulseaudio to listen on a unix-domain socket, and then configuring pulseaudio to only permit access to users in a particular group.] iceweasel is the most complicated. In addition to the pulseaudio trick, I had to make two other allowances. First, there are a lot of processes that launch a web browser in a number of different ways (sometimes asking for a new session, other times adding a tab to an existing one, sometimes passing a URL as an argument, etc). The one that I got the most stuck on was mutt. Sometimes I want to see how an HTML message looks in iceweasel. Via mailcap, mutt creates a temp file with the html content and then launches a web browser to view the file. As can be reasonably expected, this temp file is owned by my primary user, and only readable by the owner. That means my iceweasel user can't read it. Eventually, I decided the easiest way to deal with these various scenarios was to write a simple bash script to launch my web browser (see below). I registered it via update-alternatives, so most reasonable programs that want to launch a web browser will use it. The second issue is that I use the monkeysphere xul plugin to verify TLS certificates, which requires iceweasel to communicate with my monkeysphere validation agent. My agent runs as my primary user and by default only responds to queries from my primary user. Fortunately, monkeysphere is well-designed and can handle this situation. As you can see from my web launcher script, I pass MONKEYSPHERE_VALIDATION_AGENT_SOCKET=$MONKEYSPHERE_VALIDATION_AGENT_SOCKET when calling iceweasel. In addition, I added the following before I exec monkeysphere-validation-agent:
export MSVA_ALLOWED_USERS="iceweasel jamie"
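The group-restricted unix-socket arrangement suggested in the bracketed note above could look roughly like this. This is only a sketch: the group name and socket path are my assumptions, and the module arguments are based on PulseAudio's module-native-protocol-unix options, not a tested configuration:

```
# In the primary user's ~/.pulse/default.pa, instead of the TCP module:
# listen on a unix socket, restricted to members of one group
load-module module-native-protocol-unix auth-group=pulse-access socket=/tmp/pulse-shared

# In /home/skype/.pulse/client.conf, point at that socket instead of TCP:
default-server = unix:/tmp/pulse-shared
```

Each isolated user would then need to be added to the group, e.g. with adduser skype pulse-access.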
With this simple infrastructure setup, it's possible to easily isolate future programs as well. Lastly... here's the script for launching iceweasel.
#!/bin/bash
stdin=
new_session=no
url=
for arg in "$@"; do
  if [[ "$arg" =~ ^-- ]]; then
    if [ "$arg" = "--new-session" ]; then
      new_session=yes
    elif [ "$arg" = "--from-stdin" ]; then
      stdin=yes
    fi
  else
    url="$arg"
  fi
done
if [ "$stdin" = "yes" ]; then
  temp=$(mktemp)
  while read line; do
    echo "$line" >> "$temp"
  done
  # it must be readable by the iceweasel user
  chmod 644 "$temp"
  url="file://$temp"
fi
args=
if [ "$new_session" = "yes" ]; then
  args="--no-remote -ProfileManager"
fi
if [ -n "$url" ]; then
  args="$args '$url'"
fi
ssh iceweasel "MONKEYSPHERE_VALIDATION_AGENT_SOCKET=$MONKEYSPHERE_VALIDATION_AGENT_SOCKET iceweasel $args" &
[ -f "$temp" ] && sleep 5 && rm "$temp"
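Registering the launcher via update-alternatives, as mentioned above, might look like this sketch (the script's installed path and the priority value are assumptions):

```
update-alternatives --install /usr/bin/x-www-browser x-www-browser \
    /usr/local/bin/browser-launcher 50
```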
Comments by email... Hi, http://current.workingdirectory.net/posts/2011/privilege-separation/ has good intentions but afaik it does not improve security much. X applications can sniff your passwords and inject commands to your terminal emulators. I personally use xpra to get a similar solution without the hazards of X. I've been using it for two months now both at work and home. There are still bugs but the svn version is getting better all the time. See for example my reply to https://grepular.com/Protecting_a_Laptop_from_Simple_and_Sophisticated_Attacks Timo

21 November 2011

Daniel Kahn Gillmor: Adobe leaves Linux AIR users vulnerable

A few months ago, Adobe announced a slew of vulnerabilities in its Flash Player, which is a critical component of Adobe AIR:
Adobe recommends users of Adobe AIR 2.6.19140 and earlier versions for Windows, Macintosh and Linux update to Adobe AIR 2.7.0.1948. [...] June 14, 2011 - Bulletin updated with information on Adobe AIR
However, looking at Adobe's instructions for installing AIR on "Linux" systems, we see that it is impossible for people running a free desktop OS to follow Adobe's own recommendations:
Beginning June 14 2011, Adobe AIR is no longer supported for desktop Linux distributions. Users can install and run AIR 2.6 and earlier applications but can't install or update to AIR 2.7. The last version to support desktop Linux distributions is AIR 2.6.
So on the exact same day, Adobe said "we recommend you upgrade, as the version you are using is vulnerable" and "we offer you no way to upgrade". I'm left with the conclusion that Adobe's aggregate corporate message is "users of desktops based on free software should immediately uninstall AIR and stop using it". If Adobe's software were free, and they had a community around it, they could turn over support to the community if they found it too burdensome. Instead, once again, users of proprietary tools on free systems get screwed by the proprietary vendor. And they wonder why we tend to be less likely to install their tools? Application developers should avoid targeting AIR as a platform if they want to reach everyone. Tags: adobe, proprietary software, security

15 October 2011

Joachim Breitner: Help wanted maintaining link-monitor-applet

Jean-Yves Lefort's link-monitor-applet, an applet for the GNOME panel, is a great tool for those with often-changing and possibly unreliable network connections. It allows me to see, at a glance, whether I can reach the router, the internet and/or my VPN. Furthermore, the latency display tells me whether there is any point in working over SSH right now. I often observe other people, e.g. at conferences, stare for a while at a non-loading web page and then fire up a terminal window to run random ping commands, while I can immediately see that the network is down and relax.
Unfortunately, Jean-Yves seems to have stopped working on link-monitor-applet and does not reply to bug reports. This was not a big deal as long as I could use the last released version with minor patches of mine, but now gnome-panel 3 is coming up and all applets need porting to GTK3. Yesterday and today I worked on that, and got far enough that the applet is fully functional, although the visual styling is not perfect yet, because I'm not sure how to do with GtkStyleContext and Cairo what Jean-Yves did with GtkStyle and GdkGC. The code is in the gnome3 branch of my git repository.
For this issue, and any further maintenance work, I'm looking for someone who is willing to take upstream responsibility for link-monitor-applet, e.g. making sure it works with the latest GNOME (well, GNOME in fallback mode) and that bug reports are handled. The code base is not too big, and both the non-standard build system and the gob-enhanced C code are fine to work with. I'll still be happy to serve as the maintainer of the Debian package of link-monitor-applet.


9 August 2011

Philipp Kern: DebConf11: Gobby documents

If you still want to grab documents that used to be on gobby.debian.net:

21 July 2011

Stefano Zacchiroli: Test Driven Development in Debian

... or TDDD and DEP8 context As a nice byproduct of the huge "rolling" discussion we had back in April/May, various people have brainstormed about applying Test-Driven Development (TDD) techniques to Debian development. Here is a brief summary of some opinions on the matter: ... and hey, they've also coined the cool TDDD acronym, which I hereby take the liberty of re-targeting to Test-Driven Development in Debian. Having a cool acronym, we are already half-way to actually having the process up and running *g*. more testing I believe Debian needs more testing and I've been advocating for that for quite a while, e.g. at DebConf10, as one of the main goals we should pursue in the near future. Of course advocating alone is not enough in Debian to make things happen, and that is probably why this goal has been (thus far) less successful than others we have put forward, such as welcoming non-packaging contributors as Debian Project members. There are important reasons for increasing testing in Debian. quality assurance Quality Assurance has always been, and still is, one of the distinctive traits of Debian. I often say that Debian has a widespread culture of technical excellence, and that is visible in several places: the Debian Policy process, lintian, piuparts, periodic full archive rebuilds, the EDOS/Mancoosi QA tools, the cultural fact that maintainers tend to know a lot about the software they're packaging rather than only about packaging, the we release when it's ready mantra, etc. But caring about quality is not a boolean; it's rather something that should be continuously cherished, refining quality requirements over time. By simply maintaining the status quo in its QA tools and processes, Debian won't remain for long a distribution that can claim to care about package quality. Others will catch up and are in fact already doing that.
In particular, we surely have room for improvements in our quality tools and processes for: reducing inertia Inertia is a recurring topic among Debian lovers (and haters). It is often argued how difficult it is to make changes in Debian, both small and large, due to several (alleged) hindrances such as the size of the archive, the number of ports, the number of maintainers that should agree before proceeding with a cross-package change, etc. It's undeniable that the more code/ports/diversity you have, the more difficult it is to apply "global" changes. But at least for what concerns the archive size, I believe that for the most part it's just FUD: simply debunking the self-inflicted culture about how "dangerous" doing NMUs is might go, and has already gone, imho, a long way toward fighting inertia. Adding per-package and integration tests will take us another long way in reducing inertia when it comes to performing archive-wide changes. Indeed, if a package you are not entirely familiar with has extensive test suites, and if they still pass after your changes, you can be more confident in your changes. The barrier to contribution, possibly via NMU, gets reduced as a result. And if your change turns out to be bad but still not spotted by the test suites, then you can NMU (or otherwise contribute) again to extend the test suite and make life easier for future contributors to that package. It smells a lot like a useful virtuous cycle to me. autopkgtest / DEP8 how you can help Of all the above, the topic that intrigues me the most is as-installed package testing. Work on that front was started a few years ago by Ian Jackson when he was working for Canonical. The status quo is embodied by the autopkgtest package. At present, the package contains various tools and the following two specs:
  1. README.package-tests provides a standard format to declare per-package tests using the new debian/tests/control file. Tests come as executable files which will be run by the adt-run tool in a testbed where the package(s) to be tested is already installed. This part of the specs has been reified as DEP8 which I'm (supposedly) co-driving with Iustin Pop and Ian (for well-deserved credits).
  2. README.virtualisation-server describes the interface between the test runner and the testbed. A nice separation is provided between the runner and the testbed, enabling different testbed environments with varying degrees of isolation: you can have a "null" testbed which runs tests on your real machine (needless to say, this is highly discouraged, but it is provided by the adt-virt-null tool), a chroot testbed (adt-chroot), or a XEN/LVM based testbed (adt-virt-xenlvm).
The specs allow for several runtime testing scenarios and look quite flexible. The tools, on the other hand, suffer a bit of bitrot, which is unsurprising given they haven't been used much for several years. At the very minimum the following Python development tasks are in need of some love: If you are both interested in TDDD and grok Python, the above and many other tasks might whet your appetite. If this is the case don't hesitate to contact me, I'll be happy to provide some guidance.
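As a sketch of what the DEP8 pieces from point 1 look like in practice (the frobnicator commands are made up for illustration; the control fields follow README.package-tests, where Depends: @ stands for the package's own binary packages):

```
# debian/tests/control
Tests: smoke
Depends: @

# debian/tests/smoke (executable; a non-zero exit marks the test as failed)
#!/bin/sh
set -e
frobnicator --version
frobnicator --self-check
```

adt-run then sets up the testbed with the package installed and runs each declared test in it.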
Note: this post is the basis for the TDDD BoF that I will co-host with Tom Marble at DebConf11. If you plan to come, we will appreciate your thoughts on this matter as well as your help in getting the autopkgtest toolchain up and running again.

20 July 2011

Aigars Mahinovs: Debconf11 the arrival and Debcamp

Short version: arrival was quite easy, the arrival wiki page is now up-to-date and my photos are uploaded every day to this Flickr set. I had a small problem departing from CDG to Zagreb: after scanning my suitcase twice and complaining that it had too many electronic devices, security had me remove my Nexus S, Nokia N950 and Kindle 3 from the suitcase pockets and then scanned the whole lot again. After that another security guy asked me to open my suitcase and started squishing my Ligo cheese, declaring in a grave voice that it was "too soft to be legal". As I kept calm and explained the DebConf cheese party to him, he looked around, put the cheese back in the bag and said "ok, ok, off you go". So I still have that softness for the party. I heard an even better anecdote from our next year's hosts: when they were travelling through Germany, a border official asked a typical "so, what are you here for?" question. They answered that they were in transit for an IT conference and gave him the invitation letter. He read the letter, nodded and then said: "DebConf, huh? So, who was the author of the Linux kernel?" and when, after the initial shock, they gave the right answer, they were immediately waved through. Now that's what I call good border security. :) Meeting point - cafe at the Zagreb airport I met up with the guys in the Zagreb airport café, which was immediately after exiting the secure zone. We waited for dkg to arrive and get his baggage. The passport check took a couple of minutes, but the baggage claim looked to be taking almost half an hour. The airport bus After that was sorted out, we took out some local money in one of the 5 ATMs around and went to the bus. We were waved to just put our bags into the bus luggage space and get on; the driver went around to sell us tickets just before we drove off. It was one of the most comfortable and well air-conditioned buses I have been on in a long time. We were in the Zagreb bus station in less than half an hour.
Non-official bus that tried to lure us Then a funny part started: an old man came up to dkg and immediately asked "Are you going to Banja Luka?", and when he said yes, the man started dragging us along to the bus station, saying that there was a bus there that would leave in a couple of minutes, but we were in luck, so we should follow him right to the bus. Driven by curiosity, we went on. The man dragged us to the far side of the bus terminal bus stops, where there was a minibus with Bosnian number plates, but no official designation as a passenger transit bus and no other passengers. They asked a reasonable 120 kuna for the trip, but they looked very shady to me, so we refused. I think that if we had agreed, and we were lucky enough that they actually did get us to Banja Luka, the price would have risen along the way, for example using the highway tolls as an excuse. Bus ticket to Banja Luka After getting cursed at by the old man, who demanded we buy him a beer for his efforts, we went upstairs to the real bus station, where we were told that there were two buses to Banja Luka: one at 15:00 and another at 16:30, but that those two buses arrive in Banja Luka almost at the same time, so we got tickets for the latter bus and went to a local bar to get some food and drinks. I somehow managed to get sausages in deep-fried bread that for some reason cost almost triple the hamburger and fries that others got, but it sure helped to pass the time, and after all those 9 was not a bad price for a good meal. Bus to Banja Luka at Zagreb bus station We were a bit surprised by the bus drivers asking us to pay 1 (or 8 kuna) for bags to be put into the bus bag storage, especially as for several minutes we could not quite understand what they wanted of us, since neither of them spoke English, Russian or German. Border crossing I don't remember when I last crossed a real (i.e.
non-Schengen) border in a bus, so getting out of the bus and walking one-by-one to the border official was a new experience. On the other border one of the two bus drivers collected the passports and then brought them back later, so that was less painful. And by then we were in Bosnia! One of the things that greeted us was a billboard for the casino in the Banja Luka Hotel Bosna. :) By the time we got to Banja Luka, dkg had already made friends with an official from the Croatian ministry of regional development who sat behind us on the bus and knew English very well (after living in Canada for some time). The bus station in Banja Luka looked almost abandoned, as if the bus had stopped in a field, with only a few taxi drivers and a bunch of buildings some distance away showing that this was indeed part of civilization. Banja Luka taxis The taxi drivers would take you to the venue for 8-10 KM or 50-100 kuna (if their meter is broken and you look to be really ignorant of the exchange rates). If you walk on a bit, there is a bus station with an ATM taking Visa cards and, a bit further on, a bus stop from which you can get to the centre of the city for 1.5 KM. The train station looked fully abandoned when we went there; only a news kiosk was working. The bus driver reacted well to the name Hotel Bosna and waved toward it when we passed by. Central bus stop - get off here Meeting first Debian people at the hotel It is hard to miss, especially the blinking Casino sign or the white pillars with black letters Hotel Bosna on top. You just get off at the bus station Central and walk back a couple hundred meters. Then see all your Debian friends drinking in the cool bar and walk around the corner to the main entrance of the hotel to check in. Church at sunset Main event location Hotel room The rooms look really great: this is the best DebConf location that I have been to (I was not in Argentina).
Soft beds with linens changed daily, nice air conditioning, towels (changed daily, if used), and a good shower. It is lacking in the speed and power of its free WiFi, and in some places there is a clear impression that the hotel was built as a top-of-the-line hotel a couple of decades ago but has somehow fallen into disrepair since. Or it is simply used much more than expected. IMG_4973 So far I would say that the DebCamp is running full steam: two hacklabs have desks, chairs, power and network connection, and one of them even has sufficient air conditioning. The network works great for me so far, at least in the venue. The hotel wireless is barely usable to read a blog post, preferably without photos. Dinner buffet The food is being served well and on time. So far for lunch and dinner you walk up to the reception and get a food ticket based on your room number, and go to the M level for the food (the M level is between the ground floor, labelled "PR", and the first floor), where for breakfast and dinner there is a buffet, but for lunch you sit at a table and they bring you food. Vegetarians should clearly say "vegetarian" while pointing at themselves to get the special vegetarian option. So far the food has been very good, except that they often run out of popular items, such as fries, in the buffet. You can also ask for beer and other extra drinks with your food, but know that they cost extra, up to 2.5 KM for a beer, for example.

29 June 2011

Stefano Zacchiroli: debian society track at debconf

Debian/Society at DebConf11 For DebConf11 (which, needless to say, I'll be attending) I've agreed to coordinate the Debian/Society track, together with dkg. As it's Daniel who has done most of the work up to now, let me pretend I'm catching up with a brief but very important call for papers. The idea of the Debian/Society track is to collect talks and other events that explore two society-related aspects of Debian:
  1. Debian is a society in itself. It is a society formed by a group of people that are connected by relationships, share a (virtual) territory, and are subject to various kinds of self-imposed rules (technical, political, etc.)
  2. Debian relates with larger human societies and has an impact on them, mainly but not only through the Free Software artifacts that Debian produces and distributes.
Several talks and events that have already been submitted to DebConf11 do a very good job at presenting one or both of the above aspects. But we are looking for a few more. In particular, we would like to receive submissions of events about: If you think you can host the event we are looking for, just head to penta and submit your event providing all associated information (event kind, title, abstract, etc.). Please e-mail me or dkg when you've done so. We're way past the general event submission deadline, but if your proposal is just what we need to fill out the track, we can lobby the Talks Team for its acceptance (we can't guarantee acceptance, though). Daniel Kahn Gillmor
Stefano Zacchiroli

21 June 2011

Daniel Kahn Gillmor: unreproducible buildd test suite failures

I've been getting strange failures on some architectures for xdotool. xdotool is a library and a command-line tool to allow you to inject events into an existing X11 session. I'm trying to understand (or even to reproduce) these errors so i can fix them. The upstream project ships an extensive test suite; this test suite is failing on three architectures: ia64, armel, and mipsel; it passes fine on the other architectures (the hurd-i386 failure is unrelated, and i know how to fix it). The suite is failing on some "typing" tests -- some symbols "typed" are getting dropped on the failing architectures -- but it is not failing in a repeatable fashion. You can see two attempted armel builds failing with different outputs: The first failure shows [ and occasionally < failing under a us,se keymap (that is, after the test-suite's invocation of setxkbmap -option grp:switch,grp:shifts_toggle us,se):
Running test_typing.rb
Setting up keymap on new server as us
Loaded suite test_typing
Started
...........F..
Finished in 19.554214 seconds.
  1) Failure:
test_us_se_symbol_typing(XdotoolTypingTests)
    [test_typing.rb:58:in  _test_typing'
     test_typing.rb:78:in  test_us_se_symbol_typing']:
<" 12345678990-=~ !@\#$%^&*()_+[] ;:\",./<>?:\",./<>?"> expected but was
<" 12345678990-=~ !@\#$%^&*()_+] ;:\",./>?:\",./<>?">.
14 tests, 14 assertions, 1 failures, 0 errors
The second failure, on the same buildd, a day later, shows no failures under us,se, but several failures under other keymaps:
Running test_typing.rb
Setting up keymap on new server as us
Loaded suite test_typing
Started
..F.F.F.......
Finished in 16.784192 seconds.
  1) Failure:
test_de_symbol_typing(XdotoolTypingTests)
    [test_typing.rb:58:in  _test_typing'
     test_typing.rb:118:in  test_de_symbol_typing']:
<" 12345678990-=~ !@\#$%^&*()_+[] ;:\",./<>?:\",./<>?"> expected but was
<" 12345678990-=~ !@\#$%^&*()_+] ;:\",./<>?:\",./<>?">.
  2) Failure:
test_se_symbol_typing(XdotoolTypingTests)
    [test_typing.rb:58:in  _test_typing'
     test_typing.rb:108:in  test_se_symbol_typing']:
<" 12345678990-=~ !@\#$%^&*()_+[] ;:\",./<>?:\",./<>?"> expected but was
<" 12345678990-=~ !@\#$%^&*()_+[] ;:\",./<>?:\",./<>?">.
  3) Failure:
test_se_us_symbol_typing(XdotoolTypingTests)
    [test_typing.rb:58:in  _test_typing'
     test_typing.rb:88:in  test_se_us_symbol_typing']:
<" 12345678990-=~ !@\#$%^&*()_+[] ;:\",./<>?:\",./<>?"> expected but was
<" 12345678990-=~ !@\#$%^&*()_+ ;:\",./>?:\",./<>?">.
14 tests, 14 assertions, 3 failures, 0 errors
I've tried to reproduce this on a cowbuilder instance on my own armel machine; I could not reproduce the problem -- the test suites pass for me. I've asked for help on the various buildd lists, and from upstream; no resolutions have been proposed yet. I'd be grateful for any suggestions or hints of things i might want to look for. It would be a win if i could just reproduce the errors. Of course, one approach would be to disable the test suite as part of the build process, but it has already helped catch a number of other issues with the upstream source. It would be a shame to lose those benefits. Any thoughts?
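When a failure only shows up once in a while, a tiny harness that re-runs the suite and counts failures can at least quantify how flaky it is. This rerun helper is my own generic sketch, not part of xdotool; on a buildd one would wrap the real invocation, e.g. rerun 20 ruby test_typing.rb:

```shell
#!/bin/sh
# rerun: run a command repeatedly and report how many runs failed.
# Usage: rerun <count> <command> [args...]
rerun() {
  runs=$1; shift
  fails=0
  i=1
  while [ "$i" -le "$runs" ]; do
    # discard output; we only care whether the run failed
    "$@" >/dev/null 2>&1 || fails=$((fails + 1))
    i=$((i + 1))
  done
  echo "$fails/$runs failed"
}
```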

10 May 2011

Daniel Kahn Gillmor: the bleeding edge: btrfs (poor performance, alas)

I'm playing with btrfs to get a feel for what's coming up in linux filesystems. To be daring, i've configured a test machine using only btrfs for its on-disk filesystems. I really like some of the ideas put forward in the btrfs design. (i'm aware that btrfs is considered experimental-only at this point). I'm happy to report that despite several weeks of regular upgrade/churn from unstable and experimental, i have yet to see any data loss or other serious forms of failure. Unfortunately, i'm not impressed with the performance. The machine feels sluggish in this configuration, compared to how i remember it running with previous non-btrfs installations. So i ran some benchmarks. The results don't look good for btrfs in its present incarnation. UPDATE: see the comments section for revised statistics from a quieter system, with the filesystems over the same partition (btrfs is still much slower). The simplified test system i'm running has Linux kernel 2.6.39-rc6-686-pae (from experimental), 1GiB of RAM (no swap), and a single 2GHz P4 CPU. It has one parallel ATA hard disk (WDC WD400EB-00CPF0), with two primary partitions (one btrfs and one ext3). The root filesystem is btrfs. The ext3 filesystem is mounted at /mnt. I used bonnie++ to benchmark the ext3 filesystem against the btrfs filesystem as a non-privileged user. Here are the results on the test ext3 filesystem:
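The post doesn't show the exact bonnie++ invocations; they were presumably along these lines (a sketch only: the test directories and output redirection are assumptions, and -s 2264 matches the 2264M size visible in the results):

```
# -d test directory, -s file size in MiB, -m machine label for the report
bonnie++ -d /mnt/bench -s 2264 -m loki > bonnie-stats.ext3    # ext3, mounted at /mnt
bonnie++ -d ~/bench -s 2264 -m loki > bonnie-stats.btrfs      # btrfs root filesystem
```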
consoleuser@loki:~$ cat bonnie-stats.ext3 
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
loki          2264M   331  98 23464  11 10988   4  1174  85 39629   6 130.4   5
Latency             92041us    1128ms    1835ms     166ms     308ms    6549ms
Version  1.96       ------Sequential Create------ --------Random Create--------
loki                -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  9964  26 +++++ +++ 13035  26 11089  27 +++++ +++ 11888  24
Latency             17882us    1418us    1929us     489us      51us     650us
1.96,1.96,loki,1,1305039600,2264M,,331,98,23464,11,10988,4,1174,85,39629,6,130.4,5,16,,,,,9964,26,+++++,+++,13035,26,11089,27,+++++,+++,11888,24,92041us,1128ms,1835ms,166ms,308ms,6549ms,17882us,1418us,1929us,489us,51us,650us
consoleuser@loki:~$ 
And here are the results for btrfs (on the main filesystem):
consoleuser@loki:~$ cat bonnie-stats.btrfs 
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
loki          2264M    43  99 22682  17 10356   6  1038  79 28796   6  86.8  99
Latency               293ms     727ms    1222ms   46541us     504ms   13094ms
Version  1.96       ------Sequential Create------ --------Random Create--------
loki                -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  1623  33 +++++ +++  2182  57  1974  27 +++++ +++  1907  44
Latency             78474us    6839us    8791us    1746us      66us   64034us
1.96,1.96,loki,1,1305040411,2264M,,43,99,22682,17,10356,6,1038,79,28796,6,86.8,99,16,,,,,1623,33,+++++,+++,2182,57,1974,27,+++++,+++,1907,44,293ms,727ms,1222ms,46541us,504ms,13094ms,78474us,6839us,8791us,1746us,66us,64034us
consoleuser@loki:~$ 
As you can see, btrfs is significantly slower in several categories: I'm hoping that i just configured the test wrong somehow, or that i've done something grossly unfair in the system setup and configuration. (or maybe i'm mis-reading the bonnie++ output?) Maybe someone can point out my mistake, or give me pointers for what to do to try to speed up btrfs. I like the sound of the features we will eventually get from btrfs, but these performance figures seem like a pretty rough tradeoff. Tags: benchmarks, bonnie, btrfs, ext3
